End-to-End Learning to Grasp via Sampling From Object Point Clouds

Authors

Abstract

The ability to grasp objects is an essential skill that enables many robotic manipulation tasks. Recent works have studied point cloud-based methods for object grasping by starting from simulated datasets and have shown promising performance in real-world scenarios. Nevertheless, many of them still rely on ad-hoc geometric heuristics to generate grasp candidates, which fail to generalize to objects with significantly different shapes with respect to those observed during training. Several approaches exploit complex multi-stage learning strategies and local neighborhood feature extraction while ignoring semantic global information. Furthermore, they are inefficient in terms of number of training samples and time required at inference. In this letter, we propose an end-to-end learning solution to generate 6-DOF parallel-jaw grasps starting from the 3D partial view of the object. Our Learning to Grasp (L2G) method gathers information from the input point cloud through a new procedure that combines a differentiable sampling strategy to identify the visible contact points, with a feature encoder that leverages local and global cues. Overall, L2G is guided by a multi-task objective that generates a diverse set of grasps by optimizing contact point sampling, grasp regression, and grasp classification. With a thorough experimental analysis, we show the effectiveness of the proposed approach as well as its robustness and generalization abilities.
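The abstract outlines a three-part pipeline: sample candidate contact points from the partial cloud, encode them with local and global cues, then regress and classify grasps under a multi-task objective. The PyTorch-style sketch below is only a rough illustration of that structure; every module name, dimension, loss weight, and the hard top-k sampling stand-in are assumptions and do not reflect the authors' actual L2G implementation.

```python
# Minimal sketch of a multi-task grasp pipeline in the spirit of L2G.
# All names, sizes, and the simplified (non-differentiable) top-k sampling
# are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GraspPipeline(nn.Module):
    def __init__(self, num_samples=128, feat_dim=256):
        super().__init__()
        self.num_samples = num_samples
        # Shared per-point MLP (PointNet-style) producing local features.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Scores used to select visible contact-point candidates.
        self.sample_score = nn.Linear(feat_dim, 1)
        # Grasp heads consume local + global (max-pooled) features.
        self.grasp_reg = nn.Linear(2 * feat_dim, 7)   # second contact + approach params
        self.grasp_cls = nn.Linear(2 * feat_dim, 1)   # grasp quality / success logit

    def forward(self, points):
        # points: (B, N, 3) partial point cloud of the object
        feats = self.point_mlp(points)                 # (B, N, F) local features
        global_feat = feats.max(dim=1).values          # (B, F) global semantic cue
        scores = self.sample_score(feats).squeeze(-1)  # (B, N) sampling scores
        # Hard top-k selection of contact candidates (a simplified stand-in
        # for the differentiable sampling described in the abstract).
        idx = torch.topk(scores, self.num_samples, dim=1).indices
        idx = idx.unsqueeze(-1).expand(-1, -1, feats.size(-1))
        local = torch.gather(feats, 1, idx)            # (B, K, F)
        fused = torch.cat([local, global_feat.unsqueeze(1).expand_as(local)], dim=-1)
        return self.grasp_reg(fused), self.grasp_cls(fused), scores


def multi_task_loss(reg, cls, scores, reg_tgt, cls_tgt, smp_tgt):
    # Joint objective over grasp regression, grasp classification, and sampling.
    # Unit loss weights are an assumption.
    return (F.mse_loss(reg, reg_tgt)
            + F.binary_cross_entropy_with_logits(cls, cls_tgt)
            + F.binary_cross_entropy_with_logits(scores, smp_tgt))
```

A forward pass on a (B, N, 3) cloud yields per-candidate grasp parameters and quality logits alongside the sampling scores; the actual sampling procedure, heads, and objective are specified in the paper.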


Similar Articles

Learning to grasp from point clouds

We study how to encode the local information of the point clouds in such a way that a robot can learn by experimentation the graspability of objects. After learning, the robot should be able to predict the graspability of unknown objects. We consider two well known descriptors in the computer vision community: Spin images and shape context. In addition, we consider two recent and efficient desc...

End-to-end Active Object Tracking via Reinforcement Learning

In this paper we propose an active object tracking approach, which provides a tracking solution simultaneously addressing tracking and camera control. Crucially, these two tasks are tackled in an end-to-end manner via reinforcement learning. Specifically, a ConvNet-LSTM function approximator is adopted, which takes as input only visual observations (i.e., frame sequences) and directly outputs c...
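As a rough illustration of the ConvNet-LSTM actor mentioned above, here is a minimal PyTorch sketch; the layer sizes, the discrete camera-action space, and the value head are assumptions, not the paper's configuration.

```python
# Hypothetical ConvNet-LSTM policy for active tracking: frames in, camera actions out.
# Layer sizes and the discrete action space are illustrative assumptions.

import torch
import torch.nn as nn


class ConvLSTMTracker(nn.Module):
    def __init__(self, num_actions=6, hidden=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.proj = nn.LazyLinear(hidden)             # infers conv output size
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.policy = nn.Linear(hidden, num_actions)  # camera-control logits
        self.value = nn.Linear(hidden, 1)             # state value for actor-critic RL

    def forward(self, frames, state=None):
        # frames: (B, T, 3, H, W) sequence of visual observations
        b, t = frames.shape[:2]
        x = self.conv(frames.flatten(0, 1))           # (B*T, conv_features)
        x = torch.relu(self.proj(x)).view(b, t, -1)   # (B, T, hidden)
        out, state = self.lstm(x, state)
        return self.policy(out), self.value(out), state
```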

An End-to-End Approach to Natural Language Object Retrieval via Context-Aware Deep Reinforcement Learning

We propose an end-to-end approach to the natural language object retrieval task, which localizes an object within an image according to a natural language description, i.e., referring expression. Previous works divide this problem into two independent stages: first, compute region proposals from the image without the exploration of the language description; second, score the object proposals wi...

End-to-end esophagojejunostomy versus standard end-to-side esophagojejunostomy: which one is preferable?

Background: End-to-side esophagojejunostomy has almost always been associated with some degree of dysphagia. To overcome this complication we decided to perform an end-to-end anastomosis and compare it with end-to-side Roux-en-Y esophagojejunostomy. Methods: In this prospective study, between 1998 and 2005, 71 patients with a diagnosis of gastric adenocarcinoma underwent total gastrec...

VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection

Accurate detection of objects in 3D point clouds is a central problem in many applications, such as autonomous navigation, housekeeping robots, and augmented/virtual reality. To interface a highly sparse LiDAR point cloud with a region proposal network (RPN), most existing efforts have focused on hand-crafted feature representations, for example, a bird’s eye view projection. In this work, we r...
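To make the hand-crafted bird's eye view projection mentioned above concrete, the snippet below rasterizes a LiDAR cloud into a 2D occupancy grid of the kind often fed to a region proposal network; grid extents and resolution are illustrative assumptions, and this is not VoxelNet's learned voxel feature encoding.

```python
# Hypothetical bird's-eye-view occupancy grid from a LiDAR point cloud, the kind of
# hand-crafted projection the abstract mentions as input to a region proposal network.
# Grid extents and resolution are illustrative assumptions.

import numpy as np


def bev_occupancy(points, x_range=(0.0, 70.0), y_range=(-40.0, 40.0), res=0.1):
    """points: (N, 3) array of LiDAR (x, y, z); returns a 2D occupancy grid."""
    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    grid = np.zeros((h, w), dtype=np.float32)
    xi = ((points[:, 0] - x_range[0]) / res).astype(int)
    yi = ((points[:, 1] - y_range[0]) / res).astype(int)
    keep = (xi >= 0) & (xi < h) & (yi >= 0) & (yi < w)
    grid[xi[keep], yi[keep]] = 1.0   # mark cells containing at least one point
    return grid


# Example: a random sparse cloud projected to a 700 x 800 grid.
cloud = np.random.uniform([0, -40, -2], [70, 40, 2], size=(5000, 3))
print(bev_occupancy(cloud).shape)  # (700, 800)
```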


Journal

Journal title: IEEE Robotics and Automation Letters

Year: 2022

ISSN: 2377-3766

DOI: https://doi.org/10.1109/lra.2022.3191183